Search Results: "zigo"

17 October 2013

Thomas Goirand: OpenStack Havana 2013.2 Debian packages available

OpenStack Havana was released upstream today. Thanks to the release team, and a big up to TTX for his work. By the time you read this, all of my uploads have probably reached your local Debian mirror. Please try Havana using either Sid from any Debian mirror, or the Wheezy backports available here:
deb http://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main
Yes, you will need *both* repositories. This is unofficial, though these are the exact same packages as in Sid, just rebuilt for Wheezy. On the package side, here's what is new:
- All packages that need it can now be configured through debconf for the RabbitMQ settings. This is on top of what was already available for Grizzly: automated configuration of the Keystone auth token, the database, the API endpoints and much more. (Remember: this is fully optional, you can always use the non-interactive mode.)
- All Quantum plugin packages have been removed, and everything is now self-contained in the neutron-common package. The selection of which plugin to use is done directly through the core_plugin= directive in /etc/neutron/neutron.conf (see the example below). This also controls the init.d script of neutron-server, so that it loads the corresponding ini file in /etc/neutron/plugins. The plugin selection is done through debconf, so that users don't have to write the full path of the plugin class, which is (for most) very cryptic (does anyone really think that writing neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 in a configuration file is user friendly?).
- All of the package descriptions and debconf templates have been reviewed by the Debian internationalization team, and most strings are translated into Czech, Danish, French, Italian, Japanese and Russian (sometimes even more languages) for almost all packages (thanks everyone!).
I'd like to publicly thank eNovance for sponsoring my packaging work, and Mehdi Abaakouk for his work on our CI with the Tempest tests. Happy Havana release testing,
Please report bugs through the Debian BTS.
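For reference, here is what the plugin selection mentioned above ends up looking like; a minimal sketch of the relevant stanza, reusing the OVS class path quoted in the post:
# /etc/neutron/neutron.conf (excerpt)
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2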

12 October 2013

Thomas Goirand: Jenkins remote build trigger (eg: from git push) tokens

After upgrading the Sid virtual machine hosting my Jenkins, building after a git push stopped working. This is because versions 1.503 and above require an auth token for triggering builds. Since it took me some time to search the web for this, I've decided to blog about it to save time for other Jenkins users. Under each project configuration screen, in the Build Triggers section, tick the "Trigger builds remotely (e.g., from scripts)" option. Then enter a random token (I used a password generator for it). Then, in your post-receive hook, use what's below:
wget -q --no-check-certificate https://<jenkins-url>/job/heat/build?token=<your-token> -O -
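Wrapped in a complete hook, that might look like the following (a sketch: the job name, URL and token are placeholders, as above, and the hook file must be executable):
#!/bin/sh
# .git/hooks/post-receive -- trigger the Jenkins job after each push
# (remember to chmod +x this file)
wget -q --no-check-certificate \
    "https://<jenkins-url>/job/heat/build?token=<your-token>" -O - >/dev/null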

8 October 2013

Thomas Goirand: OpenStack 2013.2~rc1, aka Havana, fully available in Debian Experimental

Announcement
After four months of long work, I have finished packaging the RC1 of OpenStack. This comes right on time, just 9 days before the official Havana release. Please do try this RC1 before the official 2013.2, code name Havana, is released, and hopefully uploaded to Debian. All of the packages are available from Debian Experimental, keeping Grizzly in Sid. There are, however, also some private repositories that I maintain, holding Wheezy backports:
deb http://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main
The first repository holds the packages maintained within the Alioth group. These are built directly from my Jenkins machine, on each git push. The 2nd repository holds backports from Sid to Wheezy of the packages that I don't actively maintain (though a lot of them are in the Python module team, in which I do a lot of packaging and updates as well).
A few numbers
A few numbers about all this now. I had to work on 145 source packages: at least backport them to Wheezy, and push them into the GPLHost archive repository above. This generates 360 binary packages. Out of these, I maintain 77 source packages within the Alioth OpenStack group, generating 209 .deb files. That's a lot of stuff to deal with (and I sometimes feel a bit dizzy about it). While OpenStack is a big jigsaw puzzle to solve for the users, it is even more so for someone who has to deal with all the (sometimes buried in the code) Python dependencies. I hope others will come and join me in this packaging effort, since over time there's more and more work to be done, as the project grows. Note that most of the work is unfortunately spent on packaging (and updating) the Python dependencies; work on the OpenStack packages themselves is done last, at the end of the cycle.
Other things not packaged (yet)
Before the release (and the forthcoming Hong Kong summit on the 5th of November), I hope to be able to finish packaging TripleO. TripleO is in fact OpenStack on OpenStack, which works with nova-baremetal. I have no idea how to test or install this, though it sounds like a lot of fun. There are 6 source packages that need to be done. Also pending in the FTP masters' NEW queue is Trove: Database as a Service. I hope this one can get through soon. There is also Marconi, an incubated project for a new message queuing service, which will probably replace RabbitMQ (I'm not sure yet what it does, and I will be happy to hear about it at the summit). Lastly, there's Ironic, which will at some point replace nova-baremetal. That is, it does cloud computing over bare metal, without virtualization. All of these new projects are still at the incubation stage, and are not part of the official release yet. Though I have learned over the course of this past year that with OpenStack, it's never too early to start the packaging work.
Thanks to my sponsor!
Please note that none of this would be possible without eNovance sponsoring my packaging work. A big up to all of them for supporting and loving Debian! You guys rock. Also a special thanks to Mehdi / Sileth, for his work testing everything with the Tempest functional tests and the CI platform.

Thomas Goirand: My old 1024 bits key is dead, please use 0xAC6B43FE

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi,
I am not using my old GPG key, 0x98EF9A49, anymore. My new key, a
4096-bit key using SHA-256,
with fingerprint:
A0B1 A9F3 5089 5613 0E7A  425C D416 AD15 AC6B 43FE
has replaced the old one in the Debian keyring. Please don't encrypt
messages to me using the old key anymore.
Since the idea is that we shouldn't trust 1024-bit keys anymore, I'm
not signing this message with the old key, but only with the new one,
which has gathered enough signatures from Debian Developers (more than a
dozen).
Thomas Goirand (zigo)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAEBCAAGBQJSVC02AAoJENQWrRWsa0P+3wAP/i2ORGgXMoQVtjoUNX+x/Ovz
yoNSLztmih4pOLw9+qHJfM+OkBKUPwrkyjgBWkwD2IxoM2WRgNZaY5q/jBEaMVgq
psegqAm99zkX0XJTIYfqwOZFA1JLWMi1uLJQO71j0tkJWPzBSa6Jhai81X89HKgq
PqQXver+WbORHkYGIWwBvwj+VbPZ+ssY7sjbdWTaiMcaYjzLQR4s994FOFfTWH8G
5zLdwj+lD/+tBH90qcB9ETlbSE1WG4zBwz5f4++FcPYVUfBPosE/hcyhIp6p3SPK
8F6B51pUvqwRe52unZcoA30gEtlz+VNHGQ3yF3T1/HPlfkyysAypnZOw0md6CFv8
oIgsT+JBXVavfxxAJtemogyAQ/DPBEGuYmr72SSav+05BluBcK8Oevt3tIKnf7Q5
lPTs7lxGBKI0kSxKttm+JcDNkm70+Olh6bwh2KUPBSyVw0Sf6fmQdJt97tC4q7ky
945l42IGTOSY0rqdmOgCRu8Q5W1Ela9EDZN2jPmPu4P6nzqIRHUw3gS+YBeF1i+H
/2jw4yXSXSYQ+fVWJqNb5R2raR37ytNWcZvZvt4gDxBWRqnaK+UTN6tdF323HKmr
V/67+ewIhFtH6a9W9mPakyfiHqoK6QOyOhdjQIzL+g26QMrjJdOEWkqzvuIboGsw
OnyYVaKsZSFoKBs0kOFw
=qjaO
-----END PGP SIGNATURE-----

5 August 2013

Thomas Goirand: Why use Mailman when MLMMJ is available?

Daniel Pocock just wrote a blog post about how to set up Mailman for virtual hosting. Well, it strikes me that Mailman is a bad solution, for many reasons. First, it forces you to use @lists.example.com lists instead of @example.com. I'm not sure if that is mandatory, but it is at least how nearly every Mailman setup I've seen is done. I think that's really ugly; any mailbox should be fine, IMO. What I find particularly lame about Mailman is that these issues (plus the ones which Daniel listed) have been known for YEARS, yet nobody has come up with a patch to fix them. And it's really not hard. How do I know? Well, because I've been using MLMMJ for years without such troubles. The current situation, where everyone is on Mailman, is really LAME. Not only is MLMMJ better because it is easier to install and supports virtual hosting out of the box, but it is also written in C and is much faster than Mailman. MLMMJ has been used for high traffic lists, like those of SUSE and Gentoo. The fact that some major sites decided to make the switch isn't proof that MLMMJ is perfect, but it is a good indication that it at least works well without too much trouble. Also, with Mailman, you have to use the subject line to control your list subscriptions and send commands to it. No need for that with MLMMJ, because everything is controlled with the mailbox extension. For example, mylist+subscribe@example.com can be used to subscribe (instead of mailing mylist-request@lists.example.com and filling in the subject line, as with Mailman). So, if you don't like some of the (bad) limitations of Mailman, and would like to test something faster and easier to set up, have a try with MLMMJ (see mlmmj.org for more details, and the README.Debian inside my package).

29 July 2013

Thomas Goirand: OpenStack Havana b2 available, openstack-debian-images approved

I have finished preparing the beta 2 of the next release of OpenStack. It is currently only available from our Git on Alioth (in /git/openstack), and directly from my Jenkins repository, which creates Wheezy backports for it:
deb ftp://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main
As for every OpenStack release, a large number of Python modules needed to be packaged and are waiting in the FTP masters' NEW queue to be approved: oslo-sphinx, python-django-discover-runner, python-hacking, python-jsonrpclib, python-lesscpy, python-neutronclient, python-nosehtmloutput, python-requestbuilder, python-termcolor, sphinxcontrib-httpdomain and sphinxcontrib-pecanwsme. Let's hope that they will be approved before the next beta release in September (by which time OpenStack Havana will be in feature freeze). As for the total number of packages maintained by the OpenStack team (of which I am really the only active maintainer for the moment), there are 53 packages, plus these 11 packages waiting in the NEW queue. That's a big number of packages, and I wouldn't mind some help. One thing that annoyed the whole community is that Quantum, the OpenStack network virtualization module, had to be renamed to Neutron, because of a trademark held by Quantum (you probably remember the Quantum Fireball hard drives? Well, it's the same company). Another piece of good news is that my openstack-debian-images package has just been approved and landed in Sid. With it, you can automate the process of building a Debian image for OpenStack with a simple shell command (there's a man page that I wrote with it: read it if you need to build images). It is made of a single shell script which builds the image using kpartx, parted, mbr, debootstrap, extlinux and friends. I tried to keep it simple, not involving a huge number of components. With the release of cloud-init 0.7.2-3, I have fixed a few bugs (3 important bugs, 2 of which were RC bugs), thanks to contributions on the debian-cloud@lists.debian.org mailing list. This includes adding new init.d scripts, so we now have support for user data. This doesn't only benefit OpenStack images: it helps anyone willing to start virtual machines in the cloud (nowadays, every cloud implementation needs cloud-init installed in the virtual images). It means you can include a script in the metadata of the virtual machine when you start it, and it will be executed at startup. If everything goes as planned (that is, no new RC bug), I will upload an update of cloud-init to backports in 5 days (there is already a version there, but it doesn't have the necessary init.d scripts to execute the user data scripts), and openstack-debian-images in 9 days. Then it will be possible to build OpenStack images with absolutely all the tools available from Wheezy (and backports). I hope to be able to discuss this during DebConf 13.
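For those curious about openstack-debian-images, building an image boils down to a one-liner along these lines (a sketch: I am assuming the command name and the --release flag, so do check the man page before relying on it):
# build a Wheezy image suitable for an OpenStack cloud (run as root)
build-openstack-debian-image --release wheezy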

24 July 2013

Gergely Nagy: My git tagging convention

Zigo posted an article earlier, one which I disagree with strongly. First of all, something that has been in use ever since git was invented is hardly a new fashion. Something that is used by the very same person who invented git, and by git itself, is hardly surprising to find so widespread. Especially not when it is used as an example in git's very own documentation! Writing it off as something silly, because of the way GitHub works, without even trying to understand the reason behind it, is just plain wrong. Just because GitHub names tarballs the way it does, does not mean that we should change the way we tag.

For example, cgit (used by, among others, kernel.org) uses the tag name as-is for its tarballs, unlike GitHub which prepends the project name, so one will have to mangle the filename in a debian/watch file anyway when downloading from a host that is not GitHub. Therefore his arguments for a raw version-only tag are bogus, unless your upstream is using GitHub.

But I do not want to criticise only; rather, I want to provide a reason why a prefix is used, and why I chose the "worst" prefix: the project name itself.

One very strong reason to use a prefix, either v or any other prefix, is to ease tab completion. If you have a tag that is a bare number, you type the first digit, press tab, and get a mixture of commits and tags. You can't easily tell your completion system that you want commits or tags. With a prefix outside of the hex range, which v is, you can do that, and that makes working with it a lot easier (see the example below).

Is it for convenience? Yes. But ask yourself this: how many times do you have to write a debian/watch file? Once per package. How much time do you spend looking for a tag, or a commit? A lot more. So which one is most important? The convenience of someone who has to work around the tarball naming of the hosting service once, or the developer who works with the software daily? Obviously the latter!

But that is not all. I'm actually an advocate of prefixing the tag with the project name, and I've been doing that for all new projects for a while now, and I'm not going to change that, because it serves a very practical purpose: if you have a parent repository with a lot of submodules, and git tag tells you the project name too, that makes it much easier to navigate. I don't have to bake submodule support into my shell prompt (that would be costly), and I won't find it surprising that, if I enter a subdirectory, git tag fails to list the version I know I'm working on. Just because I happened to end up in a submodule, which is something I do often, as I have many repositories that have submodules, which in turn have other submodules, and so on and so forth through many layers.

A raw version number as a tag name is insufficient for my needs, simple as that. And I, as upstream, don't care that whoever packages my thing will have to add a line to a watch file. That only needs to be done once, while I work with tags and commits daily.

I'm sorry, but calling this naming convention a disease just because GitHub's tarball naming is what it is, without even considering that there may be other reasons behind it than legacy, and that there are hosting sites outside of GitHub, is a mistake. Dear package maintainers, please do not annoy us upstreams with requests to cripple our daily work. Thank you.
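To illustrate the completion point (with a hypothetical project name):
# tags carrying the project name are self-describing, even when listed
# from inside a supermodule checkout
git tag myproject-1.2.3
git tag -l 'myproject-*'    # lists only myproject's release tags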

Thomas Goirand: The v sickness is spreading

It seems to be a new fashion. Instead of tagging software with a normal version number, many upstreams add a one-letter prefix. Instead of version 0.1.2, it becomes version v0.1.2. This sickness has spread all around GitHub (to name only the biggest host), from one repository to the next, from one author to the next. It has consequences, because GitHub (and others) conveniently provide tarballs built from Git tags. The tarball name then becomes packagename-v0.1.2.tar.gz instead of packagename-0.1.2.tar.gz. I've even seen worse, like tags called packagename-0.1.2, whose tarball becomes packagename-packagename-0.1.2. Consequently, we have to work around a lot of problems with mangling in our debian/watch files and so on (and probably in debian/gbp.conf if you use that); see the example below. This is particularly true when upstream doesn't make tarballs and only provides tags on GitHub (which is really fine with me, as long as tags are made in a logical way). Worse: I've seen this v-prefixing disease given as an example in some howtos. What's wrong with you guys? Where is this v sickness coming from? Have you guys watched too much of the V 2009 TV series, and become fans of the Visitors? How come a version number isn't just made of numbers? Or is the v just for the virus of prefixing release names with a v? So, if you are an upstream author reading Planet Debian, with your software packaged in Debian, and you caught the bad virus of prefixing your version numbers with a v, please give up on that. Adding a v to your tags is meaningless anyway, and it's just annoying us downstream. Edit: Some people pointed me to some (IMO wrong) reasons to prefix version numbers. My original post was only half serious, and responding with facts and common sense breaks the fun! :) Anyway, the most silly one is that Linus has been using it. I won't comment on that one; it's obviously not a solid argument. The second one is tab completion. Well, if your bash-completion script is broken, fix it so that it does what you need, rather than working around the problem by polluting your tags. The 3rd argument was about merging 2 repositories. First, it has never happened to me to merge 2 completely different repos, and I very much doubt that this is an operation you have to do often. Second, if you merge the repositories, the tags lose all meaning, and I don't really think you will need them anyway. The last one would be working with submodules. I haven't done that, and it might be the only case where it makes sense, though it has nothing to do with prefixing with v (you would need a much smarter approach, like prefixing with project names, which in that case makes sense). So I stand by my post: prefixing with v makes no sense.
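For what it's worth, here is the kind of debian/watch mangling a v-prefixed GitHub tag forces on us (a sketch with a hypothetical upstream "foo"; the opts syntax is uscan's):
version=3
opts=filenamemangle=s/.+\/v?(\d\S*)\.tar\.gz/foo-$1\.tar\.gz/ \
  https://github.com/upstream/foo/tags .*/v?(\d\S*)\.tar\.gz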

7 June 2013

Thomas Goirand: Compute node with 256 GB of RAM, 2CPU with 6 cores each (24 threads total)

Will that be enough? Let's load some VMs in that beast! :) (image: too_much_ram_and_cpu)

2 June 2013

Thomas Goirand: dtc-xentop: monitoring of CPU, I/O and network for your Xen VMs

What has always annoyed me with Xen is that xentop is, well, a piece of shit! It just displays the cumulative number of sectors or network bytes read/written. But as an administrator, what you care about is knowing which of your VMs is taking all the resources, making your whole server starve. The number of sectors read/written since the VM started is of very low importance; what you want is an idea of the current transfer rate. And the same applies to networking. So, tonight, within a few hours, I hacked a small Python shell script using ncurses to do what I needed, which tells how much of the resources have been used over the last 5 seconds (and not since the VM started). This way, it is easy to know which VM is killing your server. (image: dtc-xentop) The script is adapted to my own needs only, which means that it works only for DTC-Xen VMs at GPLHost. In my case, each VM uses exactly 2 partitions, one for the filesystem and one for the swap, so that is exactly what I display. I'm sure it wouldn't be hard to adapt it to work in all cases (which would mean finding out which devices a VM uses and getting the statistics from /sys using that information, instead of deriving it from the name of the VM). But I don't need that, so the script will stay this way. Before writing this tonight, I didn't know ncurses. Well, it's really not hard, especially in Python! It took me about 2 hours to write^Whack the script (cheating on the dtc-xen SOAP server which I already had available).
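If you just want the underlying idea without the ncurses interface, you can approximate it with stock tools by diffing two batch snapshots (a rough sketch; xentop's -b/-i flags come from xen-utils, and the column layout varies between Xen versions):
#!/bin/sh
# poor man's dtc-xentop: sample the cumulative counters twice, 5 seconds apart
xentop -b -i 1 > /tmp/xentop.1
sleep 5
xentop -b -i 1 > /tmp/xentop.2
# whatever grew the most between the two snapshots is your current resource hog
diff /tmp/xentop.1 /tmp/xentop.2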

29 April 2013

Thomas Goirand: Jenkins: building Debian packages after a git push (my 2 cents of a howto)

The below is written in the hope that it will be helpful to my fellow DDs. Why build after push? Simple answer: to save time, to always use a clean build environment, and to automate more tests. Real answer: because you are lazy, tired of always typing the same build commands, and because watching the IRC channel is more fun than watching the build process. Other, less important answers: building takes CPU time and makes your computer slower for other tasks. It is really nice that building doesn't consume CPU cycles on your workstation/laptop, and that a server does the work without disturbing you while you are working. It is also super nice that it can maintain a Debian repository for you after a successful build, available for everyone to use and test, which would be harder to achieve on your work machine (which may be behind a router doing NAT, or sometimes even turned off, etc.). It's also kind of fun to have an IRC robot telling everyone when a build is successful, so that you don't have to tell them; they can see it and start testing your work.
Install a SID box that can build with cowbuilder
Install Jenkins
WARNING: Before really installing, you should probably read what's below (eg: Securing Jenkins). Simply apt-get install jenkins from Experimental (the SID version has some security issues, and has been removed from Wheezy at the request of the maintainer). Normally, after installing Jenkins, you can access it through:
http://<ip-of-your-server>:8080/
There is no auth by default, so anyone is able to access your Jenkins web GUI and start any script as the jenkins user (sic!).
Jenkins auth
Before doing anything else, you have to enable Jenkins auth; otherwise everything is accessible from the outside, meaning that more or less anyone browsing your Jenkins server is allowed to run any command. It might sound simple, but Jenkins auth is in fact tricky to activate, and it is very easy to lock yourself out, with no working web access. So here are the steps:
1. Click on "Manage Jenkins", then on "Configure system".
2. Check the "Enable security" checkbox.
3. Under "Security realm", select "Jenkins's own user database" and leave "Allow users to sign up" checked. Important: leave "Anyone can do anything" for the moment (otherwise, you will lock yourself out).
4. At the bottom of the screen, click on the SAVE button.
5. On the top right, click to log in / create an account. Create yourself an account, and stay logged in.
6. Once logged in, go back into "Manage Jenkins" -> "Configure system", under security.
7. Switch to "Project-based matrix authorization strategy". Under "User/group to add", enter the login you've just created, and click on "Add".
8. Select absolutely all checkboxes for that user, so that you make yourself an administrator.
9. For the Anonymous user, under Job, check Read, Build and Workspace. Under Overall, select Read.
10. At the bottom of the screen, hit save again.
Now, anonymous (eg: not logged-in) users should be able to see all projects, and to click on the "Build now" button. Note that if you lock yourself out, the way to fix it is to turn off Jenkins, edit config.xml, remove the "useSecurity" element and everything in "authorizationStrategy" and "securityRealm", then restart Jenkins. I had to do that multiple times until I got it right (as it isn't really obvious that you have to leave Jenkins completely insecure while creating the first user).
Securing Jenkins: proxy Jenkins through Apache to use it over SSL
When doing a quick $search-engine search, you will see lots of tutorials to use apache as a proxy, which seems to be the standard way to run Jenkins. Add the following to /etc/apache2/sites-available/default-ssl:
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
ProxyRequests Off
Then perform the following commands on the shell:
htpasswd -c /etc/apache2/jenkins_htpasswd <your-jenkins-username>
a2enmod proxy
a2enmod proxy_http
a2enmod ssl
a2ensite default-ssl
a2dissite default
apache2ctl restart
Then disable access to the port 8080 of jenkins from outside:
iptables -I INPUT -d <ip-of-your-server> -p tcp --dport 8080 -j REJECT
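If you also want Apache itself to ask for the password created with htpasswd above, a minimal stanza to add to the same default-ssl vhost would be (a sketch; adjust the AuthName and paths to taste):
<Location />
    AuthType Basic
    AuthName "Jenkins"
    AuthUserFile /etc/apache2/jenkins_htpasswd
    Require valid-user
</Location>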
Of course, this doesn't mean you shouldn't take the steps to activate Jenkins' own authentication, which is disabled by default (sic!).
Build a script to build packages in a cowbuilder
I thought it was hard. In fact, it was not. All together, this was kind of fun to hack. Yes, hack. What I did is yet another kind of 10 km-long ugly shell script. The way to use it is simply: build-openstack-pkg <package-name>. On my build server, I have put that script in /usr/bin, so that it is accessible from the default path. Ugly, but it does the job! (Jenkins build script for openstack.) At the end of the script, scan_repo() generates the necessary files for a Debian repository to work under /home/ftp. I use pure-ftpd to serve it. /home/ftp must be owned by jenkins:jenkins so that the build script can copy packages into it. This build script is by no means state of the art, and in fact it's quite hack-ish (so I'm not particularly proud of it, but it does its job). If I am showing it in this blog post, it is just to give an example of what can be done. It is left as an exercise to the reader to create another build script adapted to their own needs, and to write something cleaner and more modular (a skeleton sketch is at the end of this post).
Dependency building
Let's say that you are using the Built-Using: field, and that package B needs to be rebuilt whenever package A changes. Well, Jenkins can be configured to do that: simply edit the configuration of project B (you will find the option, it's easy). My use case: for building Glance, Heat, Keystone, Nova, Quantum, Cinder and Ceilometer, which are all components of OpenStack, I have written a small (about 500 lines) library of shell functions, and an also small (90 lines) Makefile, which are packaged in openstack-pkg-tools (so Nova, Glance, etc. all build-depend on openstack-pkg-tools). The shell functions are included in each maintainer script (debian/*.config and debian/*.postinst mainly) to avoid pre-depends that would break the debconf flow. The Makefile of openstack-pkg-tools is included in the debian/rules of each package. In such a case, trying to manage the build process by hand is boring and time consuming (spending your time watching the build process of package A, so that you can manually start the build of package B, then wait again). But it is also error prone: it is easy to make a mistake in the build order, you can forget to dpkg -i the new version of package A, etc. But that's not all. Probably at some point, you will want Jenkins to rebuild everything. Well, that's easy to do: simply create a dummy project, and have the other projects build after that one. Its build step could simply be: echo "Dummy project" as a shell script (I'm not even sure that's needed).
Configuring git to start a build on each push
In Jenkins, hover your mouse over the "Build now" link to see its URL. We just need to wget that URL from your Alioth repository. A small drawing is better than a long explanation:
# assumes each Jenkins job is named after its git repository
for i in `ls /git/openstack` ; do
    echo "wget -q --no-check-certificate \
    https://<ip-of-your-server>/job/${i}/build?delay=0sec \
    -O /dev/null" >/git/openstack/${i}/hooks/post-receive \
        && chmod 0770 /git/openstack/${i}/hooks/post-receive
done
The chmod 0770 is necessary if you don't want every Alioth user to be able to read the hook, which may contain the password of an htpasswd protection added to your Jenkins box (I'm not covering that here, but it is fairly easy to add such protection). Note that all the members of your Alioth group will still have access to this post-receive hook, containing the password of your htaccess, so you must trust everyone in your Alioth group not to do nasty things with your Jenkins.
Bonus point: IRC robot
If you would like to see the result of your builds published on IRC, Jenkins can do that. Click on "Manage Jenkins", then on "Manage Plugins". Then click on "Available" and check the box in front of "IRC plugin". Go to the bottom of the screen and click on "Add". Then check the box to restart Jenkins automatically. Once it has restarted, go again under "Manage Jenkins", then "Configure system". Select the "IRC Notification" and configure it to join the network and the channel you want. Click on "Advanced" to select the IRC nick name of your bot, and make sure you change the port (by default the plugin has 194, where IRC normally uses 6667). Be patient when waiting for the IRC robot to connect / disconnect, this can take some time. Now, for each Jenkins job, you can tick the IRC Notification option.
Doing piuparts after build
One nice thing with automated builds is that most of the time, you don't need to sit staring at them. So you can add as many tests as you want; the Jenkins IRC robot will let you know the result of your build sooner or later anyway. So adding piuparts tests to the build script seems the correct thing to do. That is still on my todo list though, so maybe it will be for my next blog post.
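For reference, the skeleton of such a cowbuilder build step could look like the following (a sketch only, not my actual script: the paths and repository location are assumptions, with WORKSPACE being the variable Jenkins sets for each job):
#!/bin/sh
# minimal Jenkins build step: build the source package from the git checkout,
# then build binaries in a clean cowbuilder chroot and drop them in the repo
set -e
PACKAGE=$1
cd ${WORKSPACE}
dpkg-buildpackage -S -us -uc -d
sudo cowbuilder --build ../${PACKAGE}_*.dsc \
    --basepath /var/cache/pbuilder/base.cow \
    --buildresult /home/ftp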

4 April 2013

Thomas Goirand: Git packaging workflow

Seeing what has been posted recently on planet.d.o, I would like to share my thoughts and workflow as well, and say that I agree with Joey Hess on many of his arguments. Especially when he says that Debian fetishises upstream tarballs. We're in 2013, in the age of the Internet, and more and more upstream authors use Git and don't care about releasing tarballs. I've seen some upstream authors simply stop doing so completely, as a Git tag is really enough. I also fully agree that disk space and network speed aren't much of a problem these days. When there are tags available, I use the following debian/gbp.conf:
[DEFAULT]
upstream-branch = master
debian-branch = debian
upstream-tag = %(version)s
compression = xz
[git-buildpackage]
export-dir = ../build-area/
On many of my packages, I now just use Git tags from upstream when they are available. To make it easier, I now nearly always use the following piece of code in my debian/rules files:
DEBVERS         ?= $(shell dpkg-parsechangelog | sed -n -e 's/^Version: //p')
VERSION         ?= $(shell echo '$(DEBVERS)' | sed -e 's/^[[:digit:]]*://' -e 's/[-].*//')
DEBFLAVOR       ?= $(shell dpkg-parsechangelog | grep -E ^Distribution: | cut -d" " -f2)
DEBPKGNAME      ?= $(shell dpkg-parsechangelog | grep -E ^Source: | cut -d" " -f2)
DEBIAN_BRANCH   ?= $(shell cat debian/gbp.conf | grep debian-branch | cut -d'=' -f2 | awk '{print $$1}')
GIT_TAG         ?= $(shell echo '$(VERSION)' | sed -e 's/~/_/')
get-upstream-sources:
        git remote add upstream git://git.example.org/proj/foo.git || true
        git fetch upstream
        if ! git checkout master ; then \
                echo "No upstream branch: checking out" ; \
                git checkout -b master upstream/master ; \
        fi
        git checkout $(DEBIAN_BRANCH)
make-orig-file:
        if [ ! -f ../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ] ; then \
                git archive --prefix=$(DEBPKGNAME)-$(GIT_TAG)/ $(GIT_TAG) | xz >../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ; \
        fi
        [ ! -e ../build-area ] && mkdir ../build-area || true
        [ ! -e ../build-area/$(DEBPKGNAME)_$(VERSION).orig.tar.xz ] && cp ../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ../build-area || true
Packaging a new upstream VERSION now means that I only have to edit debian/changelog, run ./debian/rules get-upstream-sources so that I get the new commits and tags, then git merge -X theirs VERSION to import the changes, and finally invoke ./debian/rules make-orig-file to create the orig.tar.xz. My debian branch is then ready for git-buildpackage. Note that the sed with the GIT_TAG thing is there because, unfortunately, Git doesn't support the ~ char in tags, while most of the time upstreams do not use _ in version numbers. Let's say upstream releases version 1.2.3rc1; then I simply do git tag 1.2.3_rc1 1.2.3rc1 so that I have a new tag pointing to the same commit as 1.2.3rc1, which can be used for the Debian 1.2.3~rc1-1 release and by make-orig-file. All this might look overkill at first, but in fact it is really convenient and efficient. Also, even though there is a master branch above, it isn't needed to build the package. Git is smarter than this: even if you haven't checked out the upstream master branch from the upstream remote, make-orig-file and git-buildpackage will simply continue to work. Which is cool, because this means you can store a single branch on Alioth (which is what I do).
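Put end to end, updating a package then looks like this (a sketch for a hypothetical upstream release 1.2.3 of a package foo):
dch -v 1.2.3-1 "New upstream release."
./debian/rules get-upstream-sources    # fetch new upstream commits and tags
git merge -X theirs 1.2.3              # import the upstream changes
./debian/rules make-orig-file          # create ../foo_1.2.3.orig.tar.xz
git-buildpackage                       # build from the debian branch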

30 March 2013

Thomas Goirand: An afternoon of fun hacks, booting Debian

Step one: building OpenRC, and forcing it with a big hammer to replace sysv-rc. (image: openrc) A few minutes later, with even more hacks, we have a more decent boot process which uses Gentoo boot scripts (amazingly, most of them work out of the box!): (image: openrc_take2) Notice that the udev script, hacked from the Debian sysv-rc one, still has the color scheme of Debian, while the other scripts are just drop-ins from Gentoo. Of course, this is only a big hack to get the above. There is only so much you can do in a 4-hour hacking session. It will need more serious work to become a viable solution (like finding a way to upgrade smoothly and allow the first reboot to shut down processes which aren't running under a cgroup, converting existing init scripts automatically, hooking into update-rc.d, etc.). Though the proof of concept is there: the rc-status command works, we have cgroups working, and so on. Thanks to Patrick Lauer for spending this fun afternoon with me, hacking OpenRC in SID.

1 March 2013

Thomas Goirand: Openstack Grizzly (2013.1~g3) available

This post is just a status update on the OpenStack packaging, after the next version froze last week. The OpenStack bi-annual summit will take place next April in Portland, Oregon, and if everything goes as planned, Grizzly will be released just before the summit. Grizzly will be out a bit before the next Ubuntu in April, as OpenStack releases follow Ubuntu's. OpenStack uses town names for its release names: Austin, Bexar, Cactus (2011.2), Diablo (2011.3), Essex (2012.1), Folsom (2012.2), Grizzly (2013.1). I started to seriously work on the OpenStack packaging in October, and never stopped working on it until now. Slowly, but surely, preparing all the packages and their Python module dependencies. One package at a time, working on all this every day. Folsom now works pretty well and can be used in production, and I maintain it for security (along with Essex, which is in Wheezy). Then Grizzly was frozen last week, on the 22nd of February, with the G3 release. As I had already worked on packaging the G2 release in January, managing the packaging of G3 was fast: by late Sunday, I had a repository with Grizzly built, and its corresponding Python (build-)dependencies. But while just building your own repository is easy, having all the dependencies in Debian is a lot more work. As of today, counting all the Python modules, I have touched (at least) around 50 packages in total while working on OpenStack. Many of them were simply built from scratch. The only Python dependency that needs an upgrade in Experimental, so that all dependencies are satisfied, is a new version of pep8. The rest are new Python modules that were not in Debian, and which are currently waiting in the NEW queue for FTP masters' approval: python-pecan, python-tablib, python-wsme and websockify. Some of these Python modules have been waiting there for a long time, like python-pecan (it's been in the NEW queue for more than 35 days now); others, like websockify and python-wsme, were uploaded only this week. I really hope it will be possible to have all of Grizzly in Debian before the next OpenStack summit (this depends mainly on the FTP masters). Note that I do not intend to apply security patches to Grizzly until it is released as the new OpenStack stable, so use my private Grizzly repository at your own risk. I intend to fix this by working on some continuous integration to get nightly builds, like many people are doing with OpenStack. If you want to try it out: deb http://archive.gplhost.com/debian grizzly main

6 February 2013

Thomas Goirand: Openstack Folsom fully uploaded to Experimental

I've been meaning to blog about my recent work on the OpenStack packaging for a long time; I can finally find a bit of time to do it. For those who don't know it yet, OpenStack is a fairly recent (less than 3 years old) cloud computing suite, which is becoming quite huge. If you plan on deploying a private cloud, you should definitely have a look at it. When the OpenStack project started, I was immediately interested. I started packaging the Cactus release (that is, the 3rd version of OpenStack), created the Alioth group (at the exact moment when Alioth was migrated to new hardware, which added some fun, lucky me!), and began to work on the Debian version (OpenStack used to be an Ubuntu-only project). After some success in integrating some Debian-specific patches into the Ubuntu packages, I left it aside for a bit, and completely skipped the Diablo release (which was never uploaded to Debian). Then I worked a bit on the Essex release before the freeze of Wheezy last June, together with other DDs (big up to Julien, Loic, Ghe, Sileth!). I (re-)started serious packaging work for OpenStack right after Folsom (eg: version 2012.2) was released early last October. I literally worked days and nights on it, in order to provide more automation so that it could become easier to install. Indeed, it used to be very complicated and painful, with lots of manual tasks to perform in the shell. It's hopefully a lot easier now, with most of the boring manual shell work replaced by debconf and scripts. But I'm sure more can still be done. After 4 months of effort, I finally pressed the red button and uploaded everything at once to Debian Experimental (since OpenStack Essex, version 2012.1, is in Wheezy, I of course can't upload to SID until Wheezy is out). This represents 32 source packages in total (some of them already uploaded and approved by the FTP masters: sorry to give you so much more work, guys) and 104 binary packages (and counting). So this isn't exactly small. And I already have some fixes for what's currently waiting in the NEW queue: CVE fixes, missing build-depends which we found using the Jenkins server of eNovance, who sponsors the packaging work, etc. I will probably post here again to announce when Folsom is completely approved by the FTP masters and reaches Debian Experimental. In the meanwhile, it's available from GPLHost's non-official repositories (see the howto linked below). If you would like to test the latest OpenStack release (called Folsom, or 2012.2, if you are following along), you can read the quite verbose howto I wrote here: https://wiki.debian.org/OpenStackHowto/Folsom If you do test it (don't be afraid, it's not that hard), I will be very happy to hear from you, and to receive any feedback / criticism you may have. Note that I would strongly recommend using the Folsom release rather than what is currently in Wheezy, for many reasons which would be too long to list in this blog post (let me still drop a few here: fewer bugs, more automation, and it's easier to install). Now, OpenStack is constantly evolving software, with so many companies and developers involved that I don't think my work on this beast will ever be finished. But it doesn't matter, as it's been quite some fun, and I will enjoy doing more. The next release of OpenStack is scheduled for next April (they release at the same time as Ubuntu). I hope to be able to join the Portland summit, and see all the people I have worked with over IRC and mailing lists.
This time, I hope to be able to release the OpenStack packages at the same time as the upstream source code (Debian packages have historically lagged a few months behind). The pre-version is, by the way, already on Alioth. Last, a (private) message for our famous cheese-master: please wait until everything leaves the NEW queue before bugging everyone with translations. We should probably have a serious talk about how to make them less redundant, easier to translate, and probably find a way to avoid duplication of messages across all packages.

22 August 2012

Thomas Goirand: MiniUPnP now fully in Debian

IT'S FINALLY IN! With this last upload of the MiniUPnP daemon, 1.7-3, I've reached a first milestone: all of the MiniUPnP libraries, daemons and clients are finally in Debian, in a shape of which I am no longer ashamed.
What is UPnP for?
Back in the days (around Linux 2.0.3x), the only way to share a single Internet connection was to run a Linux box as your gateway. Those days are long gone, and nearly every home connection now goes through a mini home router, which often also provides a small switch and a WiFi access point. Every computer on the LAN gets its connectivity through the router thanks to NAT, or to be more accurate, masquerading. That works very well for outbound connections (browsing the web, sending mail, etc.), but it is a lot harder for something outside to connect to a server located on the LAN. If you run a permanent server with a fixed LAN IP address, that's fine: you just configure your router to do some port forwarding, so that any inbound connection is forwarded to the computer of your choice on the LAN. But if one application needs to listen on the public IP address just for a while (for example, IRC using a DCC connection), so that something can connect to it, it becomes way more tricky. That's where UPnP comes into play. UPnP stands for Universal Plug and Play. It's a protocol through which any application on the LAN can open ports on the public IP address of the gateway. Note that a UPnP router is also often referred to as a UPnP IGD (Internet Gateway Device).
Why use MiniUPnP, don't we already have linux-igd?
Before my packaging work, the only UPnP server available in Debian was linux-igd, which has a dependency on libupnpX. That library is quite big for what it does (about 500 KiB, on top of which you have to install the 200 KiB daemon itself), while MiniUPnP is designed to be lightweight (only 158K, with dependencies on iptables, iproute, uuid-runtime and libc6 only). That may seem ridiculous, but when you have only 8 MiB of flash in your old router, saving a few hundred KiB is really important, and here we're saving more than 500 KiB, with a full implementation of the UPnP protocol, and NAT-PMP implemented as well (yes, Apple decided that one protocol wasn't enough, and that they should implement their own (sic!)).
Who is using MiniUPnP?
Probably, you already have libminiupnpc installed on your SID/Wheezy desktop, because the Transmission BitTorrent client, which is installed with GNOME by default, depends on it; and minissdpd might be on your desktop too, if you installed the Recommends: packages as well. Which is why these already score quite high in popcon. Previously, Transmission embedded its own copy of the MiniUPnP client library, so it's nice to have it as a standalone shared library now. Also, because of its very light weight, and the fact that UPnP is often needed for gaming (both the XBOX 360 and the PSP need, and can use, the MiniUPnP daemon running on your IGD), many ISPs who ship a router to their customers install MiniUPnP in it. For example, at least the 2nd and 3rd largest Internet access providers in France (Free Telecom and SFR) ship MiniUPnPd in their boxes (one of them using OpenWRT on their router). So quite probably, you too are using the MiniUPnP daemon without knowing it.
But isn't UPnP unsafe?
To this question, I'd answer that connecting to the Internet is unsafe. It isn't less safe than connecting to the Internet on a WiFi hotspot, for example. So adding UPnP support to your gateway doesn't make it less safe, especially since MiniUPnP has some features to help with security (like an ACL with IP and port lists, or a secure mode where the router will only open ports to the client requesting them). Also, we are running a distribution where dependencies are easy to check, so it's easy for you to check whether a client uses UPnP or not, and decide to trust it (or not). I personally have no issue trusting transmission, x-chat or warzone2100 (more exactly: I don't believe that incoming connectivity will make these applications less safe; there are so many other things that could go wrong).
MiniUPnP consists of what?
MiniUPnP is a set of 4 source packages to handle all of your UPnP needs:
- The client library, libminiupnpc (and the corresponding -dev package), with its sidekick miniupnpc client program (you can use it to check the public IP of your gateway, or manually open ports).
- The UPnP IGD daemon, miniupnpd, which just reached Debian Experimental. The latest version, 1.7-3, also includes debconf configuration so that you can choose the NIC and IP address to bind to. If you installed a previous version, I strongly recommend upgrading to fix earlier problems.
- The libnatpmp client library (and the -dev package) for implementing Apple's NAT-PMP protocol as a client (it also has a client binary package for your tests).
- The minissdpd daemon, to keep track of all UPnP devices that have announced themselves, and to speed up use of the UPnP protocol (it is used by the MiniUPnP client library).
A long process to get MiniUPnP into Debian
All these 4 pieces of the puzzle were not as easy to package for Debian as it may seem. The IGD daemon, MiniUPnPd, especially, used to depend on some header files that were not packaged in Debian. This was a decision of the iptables maintainers, who even after a long discussion refused to add the necessary headers, because they weren't supposed to be a kernel API. While this might really be true, the issue was that there was no public API available in the Linux kernel at all, and other distributions (Fedora, Gentoo, etc.) really did supply these needed files. As a consequence, for a very long time (years, literally), shamefully, it was impossible to build the MiniUPnP daemon on Debian, while there was no problem on other distributions. Note that I do not want to start a troll discussion here; I'm just stating facts, and both parties have very strong and respectable arguments. Anyway, this needed API was finally embedded in MiniUPnPd itself thanks to a contributor to the project, and now it builds on both the Squeeze and the Wheezy (that is, Linux 3.2) kernels without any issue.
Call for testing
Unfortunately, I don't have a Linux box as my Internet gateway. So I did the tests that I could: I installed MiniUPnPd on one computer on my LAN, and checked that I could open ports from another computer. So it seems to work, but I had no way to really test it live, with some real clients. If you have a Debian SID box as your Internet gateway, feedback would be highly appreciated.
What is remaining to do?
A transition from libminiupnpc5 to libminiupnpc8 has to happen. Of course, this is too late for Wheezy, so it will happen right after the release. There are not so many reverse dependencies, so it should be OK, but I have zero experience with transitions (yet). BTW, I had a hard time finding out what this transition process is, and maybe some more effort documenting it would help others (I'm thinking about adding some text to the Debian wiki for that). Also, I have called for help on debian-devel, because I'm currently the only one maintaining all these 4 source packages, and I'm trying to have all my packages checked by others, and team-maintained. So if you want to give a hand, or just declare yourself as a backup for when I'm busy, please raise your hand! Upstream is a friend of mine, and is very responsive to requests (it very rarely took more than 48 hours to get a reply). Enjoy MiniUPnP and report bugs! Again, feedback would be appreciated. :)
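If you can help testing, the miniupnpc command line client makes a quick check easy; something along these lines (a sketch: the LAN IP and ports are made up, check upnpc's usage output for the exact syntax):
# ask the IGD for its status and external IP address
upnpc -s
# redirect external TCP port 8080 to LAN host 192.168.1.10, port 8080
upnpc -a 192.168.1.10 8080 8080 TCP
# list the currently installed redirections
upnpc -l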

16 August 2012

Thomas Goirand: Debian 19th birthday at SHLUG Hacking Thursday

We were nearly 20 people from the Shanghai Linux User Group this evening, at JA cafe (near JinAn temple), celebrating Debian's 19th birthday. As the only DD present at the event (Li Daobing was busy somewhere else), I was honored by my friends to be the one cutting the cake. Above, the (very nice) cake before we shared it; below, a few SHLUG members present that evening.

30 May 2012

Thomas Goirand: New release of MLMMJ (version 1.2.18.0) uploaded in Debian

MLMMJ stands for Mailing List Manager Made Joyful. To me, it's the best mailing list manager available. Not only is it written in C (no slow interpreted language here), but it is also easy and very convenient to set up. No hack is needed to make it run in a multi-domain environment, and there is no need to run a special subdomain for your lists (yes, I'm thinking about you, silly Mailman). If you didn't know, MLMMJ is used to handle relatively high traffic lists: all the mailing lists for Gentoo use MLMMJ, for example. MLMMJ 1.2.18.0 was just released upstream, a few days ago (on the 29th of May). It is with great pleasure that I packaged (and uploaded to SID) the latest version, and wrote the changelog, where I noted that 6 Debian-specific patches (out of 8) could be removed! Among the funny changes are the renaming of mlmmj-recieve into mlmmj-receive, as it always should have been (with a symlink to keep backward compatibility), and the removal of the .sh extension from mlmmj-make-ml (which had been patched in Debian to keep it policy compliant: that's one of the removed patches). Even though I did a few quick functional tests before uploading, I'd be happy to get some feedback before Wheezy is out, so please test and eventually report bugs!

21 May 2012

Thomas Goirand: Unit tests for PHP: PHPUnit

PHPUnit, according to its PTS, has been in Debian since 2009. But it was orphaned, and nobody took care of it for a while. That is, until a few weeks ago, when Luis Uribe started to work on it. Since he isn't a DD, and since I take care of, I believe, about half of all the PHP PEAR packages in Debian, I started working with him on redoing the packaging of PHPUnit for Debian. The old version was quite wrong, with missing dependencies, and not really working. What a shame. PHPUnit 3.6 has now been in Debian SID for 3 days, and it's working well. I'm now adding runs of upstream unit tests at build time (of course, only if DEB_BUILD_OPTIONS doesn't contain nocheck, as per the policy); see the sketch below. All together, it's been quite some fun to hack on this, and I'm quite happy with the results, even though there's still a lot of work remaining. PHP itself runs unit tests at build time. And not just a few: more than 11000 of them. The only small issue is that they are totally outdated. Over time, the var_dump() function has evolved, and doesn't print things the same way anymore. One may say that it prints things in a nicer way, but as a result, many of the tests that were supposed to pass actually fail because of these differences in var_dump() output over time. So I started working on fixing some of the tests. Most of the time they are very easy to fix, but it takes a long time to fix all these small unit tests. So far, I've been able to fix most of what's in tests/ and Zend/tests (more than 161 tests fixed), and the beginning of ext/*/tests: for the moment bcmath, bz2, calendar, curl, date, dba, dom, ereg, and a part of exif, which so far represents 214 fixed tests, and I'll try to do more fixes when I have time. I hope to send the patches upstream soon. The final goal is of course to have the build of PHP fail if unit tests fail as well. For the moment, unit tests do run at build time, but the build doesn't care about the results, which I think is stupid (it's wasting CPU cycles for no reason, IMO). If you maintain some PEAR packages, I would welcome you to first join the PKG-PEAR team on Alioth and team-maintain your packages, to switch over to pkg-php-tools and the dh 8 auto-sequencer, and to follow the PKG-PHP-PEAR packaging guidelines so that we have consistency in the archive. And of course, run unit tests, by build-depending on phpunit (>= 3.6). Note that unit tests in PEAR packages are tracked on the Debian wiki. This post is also a call for help, because I feel quite alone doing this work of packaging PEAR packages, on which many PHP applications depend (think of roundcube, horde, extplorer, and many more). Other teams are doing very well, like the Perl team, and there's no reason why Debian shouldn't maintain PHP libraries as well as it does the ones for Perl.
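Concretely, the build-time test run amounts to something like this in debian/rules (a sketch, assuming the dh override style mentioned above and a tests/ directory in the package):
# run the package's PHPUnit test suite at build time, honouring nocheck
override_dh_auto_test:
	if ! echo "$(DEB_BUILD_OPTIONS)" | grep -q nocheck ; then \
		phpunit tests ; \
	fi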

1 February 2012

Thomas Goirand: Uploading a new Git repo to Alioth

Over the years, I've always uploaded my local git repos (git clone --bare myrepo repo.git) as tarballs, which I then uncompress on Alioth. But I've grown tired of doing that by hand, so I wrote a tiny shell script for it. Nothing fancy. You just do: alioth-new-git openstack and it uploads your current repo to Alioth under /git/openstack, for example. It's a really stupid script; please don't point me to gbp-create-remote-repo, I know it exists, but I prefer my own tiny thing. Here's the (lame) script:
#!/bin/sh
set -e
CWD=`pwd`
PKG_NAME=`basename ${CWD}`
usage () {
echo "$0 creates a new Git repository out of your current working tree on alioth."
echo "You must be at the root of that local git repository"
echo "$0 will use the current folder as the name for the Alioth git repository."
echo ""
echo "Usage: $0 <destination-project-path-on-alioth>"
echo "example: $0 openstack will create a /git/openstack/${PKG_NAME}.git repository"
echo "note that you will need to have write access on the destination project,"
echo "which means you must be a member of that said project on Alioth."
echo ""
echo "Please send patch/comments to: Thomas Goirand <zigo@debian.org>"
exit 1
}
if [ $# != 1 ] ; then
usage
fi
DEST_PROJECT=$1
# Create the tarball and upload it to Alioth
cd ..
echo "===> Cloning ${PKG_NAME} as bare: ${PKG_NAME}.git"
git clone --bare ${PKG_NAME} ${PKG_NAME}.git
echo "===> Building tarball: ${PKG_NAME}.git.tar.gz"
tar -czf ${PKG_NAME}.git.tar.gz ${PKG_NAME}.git
echo "===> Uploading ${PKG_NAME}.git.tar.gz to vasks.debian.org"
scp ${PKG_NAME}.git.tar.gz vasks.debian.org:
# Uncompress it on Alioth, fix perms and hook
# note that the below block should be put back in a single
# line, but has been broken into multiple lines for readability
# on this blog
ssh vasks.debian.org "cd /git/${DEST_PROJECT} &&
echo '===> Uncompressing ${PKG_NAME}.git.tar.gz in /git/${DEST_PROJECT}' &&
tar -xzf ~/${PKG_NAME}.git.tar.gz &&
echo '===> Activating update-server-info hook' &&
mv ${PKG_NAME}.git/hooks/post-update.sample ${PKG_NAME}.git/hooks/post-update &&
cd /git/${DEST_PROJECT}/${PKG_NAME}.git &&
git --bare update-server-info &&
echo '===> Deleting tarball on alioth' &&
rm ~/${PKG_NAME}.git.tar.gz &&
echo '===> Fixing g+w unix permissions' &&
find /git/${DEST_PROJECT}/${PKG_NAME}.git -exec chmod g+w {} \\;"
# Clean locally created files
echo "===> Cleaning local bare copy and tarball"
rm ${PKG_NAME}.git.tar.gz
rm -rf ${PKG_NAME}.git
